HMMs II - Posterior Decoding and Learning

Authors

  • Manolis Kellis
  • Amer Fejzic
  • Elham Azizi
Abstract

In the last lecture we became familiar with discrete-time Markov chains and Hidden Markov Models (HMMs). A Markov chain is a discrete random process that obeys the Markov property: the probability of the next state depends only on the current state, not on the past. The chain models how the state changes from step to step using transition probabilities. A Markov Model (MM) is therefore fully defined by a set of states, a matrix of transition probabilities between those states, and an initial state distribution.
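The three ingredients above (states, transition matrix, initial distribution) can be sketched as a small simulation; the two-state "weather" chain below is an illustrative toy example, not taken from the lecture:

```python
import random

# Toy Markov model (illustrative assumption, not from the lecture):
# states, an initial distribution, and a transition matrix fully
# specify the chain.
states = ["Sunny", "Rainy"]
init = [0.6, 0.4]                # P(first state)
trans = [[0.8, 0.2],             # P(next state | Sunny)
         [0.3, 0.7]]             # P(next state | Rainy)

def simulate(n_steps, seed=0):
    """Sample a state sequence from the Markov chain."""
    rng = random.Random(seed)
    seq = [rng.choices(range(len(states)), weights=init)[0]]
    for _ in range(n_steps - 1):
        cur = seq[-1]
        seq.append(rng.choices(range(len(states)), weights=trans[cur])[0])
    return [states[i] for i in seq]

print(simulate(5))
```

Because each step samples only from the row of the transition matrix indexed by the current state, the simulation uses no earlier history, which is exactly the Markov property.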

Similar Articles

Sequence Annotation with HMMs: New Problems and Their Complexity

Hidden Markov models (HMMs) and their variants were successfully used for several sequence annotation tasks. Traditionally, inference with HMMs is done using the Viterbi and posterior decoding algorithms. However, recently a variety of different optimization criteria and associated computational problems were proposed. In this paper, we consider three HMM decoding criteria and prove their NP ha...
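The abstract above contrasts Viterbi decoding (the single most likely state path) with posterior decoding. A minimal Viterbi decoder in log space, assuming toy two-state parameters of my own choosing, looks like this:

```python
import numpy as np

# Illustrative toy HMM (assumed parameters, not from the paper above).
A = np.array([[0.9, 0.1],        # transition probabilities
              [0.2, 0.8]])
E = np.array([[0.7, 0.3],        # emission probabilities P(obs | state)
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])        # initial state distribution

def viterbi(obs):
    """Most likely state path for an observation sequence (log space)."""
    logA, logE, logpi = np.log(A), np.log(E), np.log(pi)
    V = logpi + logE[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = V[:, None] + logA       # scores[i, j]: best path ending in j via i
        back.append(scores.argmax(axis=0))
        V = scores.max(axis=0) + logE[:, o]
    path = [int(V.argmax())]
    for ptr in reversed(back):           # follow backpointers
        path.append(int(ptr[path[-1]]))
    return path[::-1]

print(viterbi([0, 0, 1, 1, 1]))
```

Posterior decoding, by contrast, would pick the individually most probable state at each position from the forward-backward probabilities; the two criteria can disagree, which is part of what motivates the alternative criteria studied in the paper.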


Reviving discrete HMMs: the myth about the superiority of continuous HMMs

Despite what is generally believed, we have recently shown that discrete-distribution HMMs can outperform continuous-density HMMs at significantly faster decoding speeds. Recognition performance and decoding speed of the discrete HMMs are improved by using product-code Vector Quantization (VQ) and mixtures of discrete distributions. In this paper, we present efficient training and decoding algor...


Self-organization in mixture densities of HMM based speech recognition

In this paper experiments are presented to apply Self-Organizing Map (SOM) and Learning Vector Quantization (LVQ) for training mixture density hidden Markov models (HMMs) in automatic speech recognition. The decoding of spoken words into text is made using speaker dependent, but vocabulary and context independent phoneme HMMs. Each HMM has a set of states and the output density of each state is...


Constrained Kronecker Deltas for Fast Approximate Inference and Estimation

We are interested in constructing fast, principled algorithms for scaling up decoding and parameter estimation in probability models such as Hidden Markov Models (HMMs) and linear-chain Conditional Random Fields (CRFs) with large state spaces. In this paper we present a principled extension of beam search to beam inference and learning. We present a new approach for approximate inference based ...
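The beam idea the abstract extends can be illustrated on the HMM forward recursion: at each step, states whose probability falls far below the best state's are zeroed out before the next step, trading a small approximation error for less work in large state spaces. The parameters below are my own toy assumptions, not from the paper:

```python
import numpy as np

# Toy HMM (assumed parameters for illustration only).
A = np.array([[0.9, 0.1],        # transitions
              [0.2, 0.8]])
E = np.array([[0.7, 0.3],        # emissions P(obs | state)
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])

def forward_beam(obs, beam=1e-3):
    """Forward recursion with per-step beam pruning.

    States with probability below `beam` times the current maximum
    are dropped (set to zero) before being propagated. With beam=0
    this reduces to the exact forward algorithm.
    """
    alpha = pi * E[:, obs[0]]
    for o in obs[1:]:
        alpha = np.where(alpha >= beam * alpha.max(), alpha, 0.0)  # prune
        alpha = (alpha @ A) * E[:, o]
    return alpha.sum()    # (approximate) likelihood of the sequence

print(forward_beam([0, 0, 1]))
```

In a two-state toy model pruning saves nothing, but the same per-step thresholding is what makes forward-style inference tractable when the state space is large.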


KL-Divergence Guided Two-Beam Viterbi Algorithm on Factorial HMMs

This thesis addresses the high computational complexity that arises when decoding hidden Markov models (HMMs) with a large number of states. A novel approach, the two-beam Viterbi, with an extra forward beam, for decoding HMMs is implemented on a system that uses a factorial HMM to simultaneously recognize a pair of isolated digits on one audio channel. The two-beam Viterbi alg...



Journal:

Volume   Issue

Pages  -

Publication date: 2010